On the heels of Duncan Epping’s blog article about ESX partitioning best practices comes my first look at the automatic partitioning in ESXi.
Before I dive into ESXi, allow me to provide a brief primer on the history of ESX partitioning as I see it. Once upon a time, long before the coming of ESXi and when there was only ESX, great debates raged in books, whitepapers, knowledgebase articles, datacenters, forums, and blogs about how best to partition ESX. Careful planning up front meant better optimization, stability, and uptime of the ESX host, saving the company the cost of poor host performance, downtime, and unplanned outages. The ESX administrator also slept better at night.
What was the big debate? ESX administrators come from varying backgrounds where they dealt with a range of operating systems, and each administrator brings to the table their best ideas, experiences, and nightmares that he or she would probably like to forget. With the ESX Service Console (Console Operating System, or COS for short) based on a version of Red Hat Linux, the Linux and Unix administrators were natively the best equipped to carry on an intelligent conversation about Linux partitioning do's and don'ts. However, ESX added a few twists in how it used the COS and the file system. Taking into account the native behavior of Red Hat in addition to the ESX-specific characteristics, partitioning best practices evolved. While not every administrator will agree on the exact size a given partition should be, a pattern in how ESX is properly partitioned is fairly evident, plus or minus the partition size variance that fits the personal taste of the administrator or perhaps company baseline policies or standards.
ESX partitioning strategy was an art form; maybe something to brag about when getting your geek on in a circle of peers. I'll go out on a limb here and say that no self-respecting ESX administrator used the automatic partitioning ESX offered, even though it might have saved a little time and was, in fact, and still is today, the default partitioning choice during an ESX installation. No offense intended, but the automatic partitioning in ESX was less than stellar. In fact, the automatic partition sizes were not consistent with the best practices taught in the Virtual Infrastructure classroom training. That right there is as good a reason as any to partition manually.
Recently, I began evaluating ESXi. Having been a seasoned ESX administrator for quite a while, one of the things I noticed about the ESXi installation (in addition to its incredibly fast installation time) was the partitioning configuration. Or lack thereof. Had I blown through the screens so quickly that I didn’t notice and accidentally performed automatic partitioning? It was no accident. A second installation revealed that ESXi makes the partitioning choices for us, and provides us no opportunity to size our own partitions short of deleting and recreating after the fact using fdisk in the unsupported “Tech Support Mode” console. That’s nothing but trouble. The only partitions you should be creating after the ESX/ESXi installation are VMFS volumes.
So I’m stuck with automatic partitioning in ESXi. It’s a paradigm change I could get used to if it’s technically sound. But what does it look like and what are the partitions in ESXi used for? An unsupported trek into the “Tech Support Mode” console allowed me to run some df -h, fdisk -l, and ls commands to find out.
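For anyone who wants to follow along, these are the commands as I ran them from the busybox shell in Tech Support Mode (reached by pressing Alt+F1 at the console and typing "unsupported" – again, not a supported configuration, so proceed at your own risk):

    # list the disks and their partition tables
    fdisk -l
    # show the mounted volumes, their sizes, and how full they are
    df -h
    # poke around the directory structure of the interesting volumes
    ls -l /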
On an 8GB disk, an installation of ESXi 3.5.0 Update 2 creates the following partitions automatically:
vmhba1:0:0:1   Extended
vmhba1:0:0:2   FAT16        4,095MB   looks like /; contains directories: downloads, uwswap, var
vmhba1:0:0:3   VMFS         3,347MB   datastore for VMs, ISOs, and VMkernel swap
vmhba1:0:0:4   FAT16 <32M   4MB       unknown
vmhba1:0:0:5   FAT16        48MB      contains miscellaneous files (boot.cfg, *.tgz, etc.)
vmhba1:0:0:6   FAT16        48MB      contains miscellaneous files (boot.cfg, *.tgz, etc.)
vmhba1:0:0:7   VMKcore      110MB     allocated for VMkernel core dumps
vmhba1:0:0:8   FAT16        540MB     contains directories: core (for COS dumps?), opt
OK, some of the partition sizes above were quite small. Then again, I started out with a small 8GB disk, not giving ESXi much to work with. My next question was "Would the auto-created partition sizes be larger if I used a larger disk?" So I reinstalled on a 16GB disk to find out.
On a 16GB disk, an installation of ESXi 3.5.0 Update 2 creates the following partitions automatically:
vmhba1:0:0:1   Extended
vmhba1:0:0:2   FAT16        4,095MB    looks like /; contains directories: downloads, uwswap, var
vmhba1:0:0:3   VMFS         11,539MB   datastore for VMs, ISOs, and VMkernel swap
vmhba1:0:0:4   FAT16 <32M   4MB        unknown
vmhba1:0:0:5   FAT16        48MB       contains miscellaneous files (boot.cfg, *.tgz, etc.)
vmhba1:0:0:6   FAT16        48MB       contains miscellaneous files (boot.cfg, *.tgz, etc.)
vmhba1:0:0:7   VMKcore      110MB      allocated for VMkernel core dumps
vmhba1:0:0:8   FAT16        540MB      contains directories: core (for COS dumps?), opt
Between the 8GB install and the 16GB install, not a single partition size changed, with the exception of VMFS, which simply grew to consume the additional 8GB of disk.
With this automatic partitioning scheme, how heavily is each partition utilized out of the box?
vmhba1:0:0:2   FAT16        4,095MB    25% used
vmhba1:0:0:3   VMFS         11,539MB   n/a
vmhba1:0:0:4   FAT16 <32M   4MB        unknown
vmhba1:0:0:5   FAT16        48MB       0% used
vmhba1:0:0:6   FAT16        48MB       76% used
vmhba1:0:0:7   VMKcore      110MB      0% used (and let's hope for the sake of uptime it stays this way)
vmhba1:0:0:8   FAT16        540MB      32% used
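If you want to keep an eye on those numbers over time, a quick check from the same (unsupported) console is easy enough. This is just a sketch, assuming the busybox df and awk on ESXi behave as they did in my testing; the 90% threshold is an arbitrary number of my choosing, not a VMware recommendation:

    # warn on any mounted volume that is more than 90% full
    df -h | awk 'NR > 1 && $5+0 > 90 { print "WARNING: " $6 " is " $5 " full" }'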
Bottom line: Am I comfortable with this partitioning? Time will tell if it's adequate. I'll keep a watchful eye on the experiences of others on the VMTN forums. I'm not using ESXi in production or development right now. Further information gathering may reveal other deployment methods for ESXi that allow more granular control of its installation parameters. For now, I'm in the evaluation stage to see if ESXi is the right fit. It certainly carries with it some attractive attributes, but there are also things I must learn to let go of, such as the Service Console and all of the utility it provides.
Jason, I like the article – but we MUST move to letting the system partition the disks rather than having to rely on users doing it.
This would make our product easier to install, less expensive to manage in terms of opex, less error prone (not everyone understands partitioning), and it would require fewer support calls (better for customers and for VMware).
So perhaps some arts are better lost – or, in this case, their patterns should evolve into an automated program (in the installer) that embodies the best practices you apply by hand.
It lets us all spend less time at the console and more time doing more interesting things 🙂
BTW, if you have a proven practice for partitioning ESX then post something on VIOPS! 🙂
Steve, I agree with your comments. If auto partitioning works for everyone and is reliable, I'm totally for it. It's simply a change in the way we (I) are used to doing things, and we (I) can adapt. Right now I have more anxiety about losing the COS in general. Much of what I know about managing and tuning ESX retires with the COS. I need to learn and understand the new ways we manage ESXi in a sort of console-less world. In addition, the 3rd party products I use aren't supporting ESXi yet. Don't get me wrong – from a high level, I think it's the right direction and we (I) will get there.
Interesting examination. I think the real test will be what happens with larger (>1TB) disks. At VMworld a couple of years ago it was shown that the sweet spot for sizing VMFS partitions was somewhere around 512GB; obviously this will vary depending on how many VMs you're running and their VMDK sizes. But my point is: overtaxing the VMFS partition by putting too many VMs on it could be bad, and if the autopartitioning leaves you with one humongous VMFS partition, well…
Mark, ESXi can try all it likes to provide me with auto-created VMFS volumes, but as you mentioned, we don't want too many VMs or too much activity on one LUN; therefore, I prefer LUNs in the 500-800GB range. We can easily destroy VMFS volumes (with no VMs on them) and recreate VMFS volumes that are the appropriate size. As far as SAN is concerned, I don't install ESX/ESXi with SAN LUNs presented. In the past, ESX could nuke the contents of pre-existing VMFS volumes that it found. I believe that has been remedied in current ESX builds, but I want to test some more before I throw caution to the wind.
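For what it's worth, once a right-sized partition exists, putting a VMFS3 file system on it is a one-liner with vmkfstools. This is a rough sketch only – the device name, block size, and datastore label below are placeholders, and the command destroys whatever is on that partition:

    # create a VMFS3 volume on an existing partition
    # (device name, block size, and label are examples only - this wipes the partition)
    vmkfstools -C vmfs3 -b 1m -S my_datastore vmhba1:0:0:1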
Jas
Hi, I read this post titled "Partitioning a lost art form in ESXi – boche.net – VMware Virtualization Evangelist" about a week ago, might have been last Saturday, and thought it was a good point. I've been trying for the last few days to find your site again but ended up finding it in Google using the keywords "mcse training". Anyway, I've forgotten what I wanted to post last week but I will be returning regularly. Bookmarked the page.
Glad you were able to find the blog again reece!
Jas
I have been working with ESXi – actually, the free hypervisor has been an incentive to perform much-needed server consolidation. I am really impressed with the small footprint. I have been playing with the ESXi console – yes, it does have one, a very basic one, and it does support a whole bunch of services, all of which are disabled by default. I have enabled SSH for the moment. What I was looking for was a way to install the Red Hat package manager on the console, but I realized there was not enough space. Hence I have been looking for ways to repartition – any suggestions? Also, are there any recommendations for scheduling backups of VMs on ESXi?
VMware Consolidated Backup (VCB) is an option. It’s part of the VMware Foundation Licensing. vRangerPRO from Vizioncore will also soon be an option (hopefully in the next release).
As far as other options go, a recent VMTN forum post was made here: http://communities.vmware.com/thread/164134 Take a look at it and see if you can find a solution that fits for you.
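If you do go the VCB route, a full VM export from the Windows backup proxy boils down to a single vcbMounter command. Just a sketch – the host, credentials, VM name, and destination path below are placeholders, not a production-ready recipe:

    vcbMounter -h virtualcenter.example.com -u backupadmin -p <password> -a name:myvm01 -r D:\vcb-backups\myvm01 -t fullvm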
Jas
I’m with Stevie on this one… saying that an art has been lost because there are no partitioning options in ESXi is like saying the art of tweaking Windows or Linux for specific hardware is lost when you stick a hypervisor underneath it. Some arts are definitely better lost.
The reason partitioning layouts were important with the COS was that people wanted to treat the COS like a general purpose Linux OS (like the commenter Ahsan): sticking agents in there, creating multiple local user accounts, installing RPMs for this, that, and the other, etc. That world is gone with ESXi, for bloody good reason – ESX is not a general purpose OS and should not be treated as such!
With ESXi, VMware has much more control over what people can do to it and hence can give us a partition configuration out of the box that has been designed by the people who wrote ESXi. It's optimal, fit for purpose, and has one purpose only – which is how it should be.
In my experience, a lot of ESX admins learned the “what” of optimal ESX partitioning without really understanding the “why”. Having a reasonable understanding of the “why”, I am not concerned at the lack of partitioning options in ESXi one bit.
Have you ever had issues with VMFS creation in the UI? The UI calls into the API and follows VMFS best practices by default.
Any changes in all this with ESXi 4??
You said time will tell whether the existing ESXi disk partitioning is adequate. Unfortunately, it isn't. See the problem with ESXi on vSphere 4 and 4U1 where hosts keep disconnecting from and reconnecting to VC. That is surely a problem with the small / partition filling up.
One issue I've noticed with the automatic partitions in ESXi is that it uses 1MB blocks, which ends up limiting VM disk sizes to ~250GB unless you go in there and redo the VMFS partition.
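For reference, the VMFS3 block size is fixed at creation time – 1MB, 2MB, 4MB, and 8MB blocks allow maximum file sizes of roughly 256GB, 512GB, 1TB, and 2TB respectively – so working around it means evacuating the datastore and recreating it with a larger block size via vmkfstools. A sketch only; the device name and label are placeholders, and the command destroys everything on the volume:

    # recreate the local datastore with a 4MB block size (max file size ~1TB)
    # (device name and label are examples only - this wipes the volume)
    vmkfstools -C vmfs3 -b 4m -S local_datastore vmhba1:0:0:3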